Artifact Restoration in Histology Images with Diffusion Probabilistic Models
Histological whole slide images (WSIs) are often compromised by artifacts,
such as tissue folding and bubbles, which increase the examination difficulty
for both pathologists and Computer-Aided Diagnosis (CAD) systems. Existing
approaches to restoring artifact images are confined to Generative Adversarial
Networks (GANs), where restoration is formulated as an image-to-image
transfer. These methods are prone to mode collapse and unexpected mistransfer
of stain style, leading to unsatisfactory and unrealistic restored images. We
make the first attempt at a denoising diffusion probabilistic model for
histological artifact restoration, namely ArtiFusion. Specifically, ArtiFusion
formulates artifact region restoration as a gradual denoising process, and its
training relies solely on artifact-free images to reduce training complexity.
Furthermore, to capture local-global correlations in regional artifact
restoration, a novel Swin-Transformer denoising architecture is designed,
along with a time token scheme. Our extensive evaluations demonstrate the
effectiveness of ArtiFusion as a pre-processing method for histology analysis,
which successfully preserves tissue structure and stain style in artifact-free
regions during restoration. Code is available at
https://github.com/zhenqi-he/ArtiFusion. Comment: Accepted by MICCAI202
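The gradual-denoising restoration described above can be illustrated with a short sketch: a RePaint-style masked reverse-diffusion loop in PyTorch. The `denoiser` network, the linear beta schedule, and the `restore` helper below are illustrative assumptions, not the actual ArtiFusion implementation (see the linked repository for that).

```python
# Minimal sketch of diffusion-based artifact-region restoration.
# `denoiser` is a hypothetical noise-prediction network taking (x_t, t).
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def restore(denoiser, image, mask):
    """image: (B,3,H,W) with artifacts; mask: 1 inside the artifact region."""
    x = torch.randn_like(image)
    for t in reversed(range(T)):
        # Standard DDPM reverse step: predict noise, compute posterior mean.
        t_batch = torch.full((image.size(0),), t, device=image.device)
        eps = denoiser(x, t_batch)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        # Pin the artifact-free pixels to their known values (noised to the
        # matching step), so only the masked region is actually synthesized.
        if t > 0:
            known = (torch.sqrt(alpha_bars[t - 1]) * image +
                     torch.sqrt(1.0 - alpha_bars[t - 1]) * torch.randn_like(image))
        else:
            known = image
        x = mask * x + (1.0 - mask) * known
    return x
```

Constraining the update to the masked region is what lets the tissue structure and stain style of artifact-free areas pass through unchanged.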
Generative Model Based Noise Robust Training for Unsupervised Domain Adaptation
Target domain pseudo-labelling has shown effectiveness in unsupervised domain
adaptation (UDA). However, pseudo-labels of unlabeled target domain data are
inevitably noisy due to the distribution shift between source and target
domains. This paper proposes a Generative model-based Noise-Robust Training
method (GeNRT), which eliminates domain shift while mitigating label noise.
GeNRT incorporates a Distribution-based Class-wise Feature Augmentation (D-CFA)
and a Generative-Discriminative classifier Consistency (GDC), both based on the
class-wise target distributions modelled by generative models. D-CFA minimizes
the domain gap by augmenting the source data with distribution-sampled target
features, and trains a noise-robust discriminative classifier by using target
domain knowledge from the generative models. GDC regards all the class-wise
generative models as generative classifiers and enforces a consistency
regularization between the generative and discriminative classifiers. It
exploits an ensemble of target knowledge from all the generative models to
train a noise-robust discriminative classifier, and it is theoretically linked
to the Ben-David domain adaptation theorem for reducing the domain gap.
Extensive experiments on Office-Home, PACS, and Digit-Five show that GeNRT
achieves performance comparable to state-of-the-art methods under
single-source and multi-source UDA settings.
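As a rough illustration of the two ingredients, the sketch below models each target class as a diagonal Gaussian over features and uses it for D-CFA-style augmentation and a GDC-style consistency term. The function names and the diagonal-Gaussian choice are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch: class-wise Gaussian models of target features, used for
# (1) D-CFA feature augmentation and (2) GDC generative/discriminative
# consistency. Assumes every class has at least two pseudo-labelled samples.
import torch
import torch.nn.functional as F

def fit_class_gaussians(feats, pseudo_labels, num_classes):
    """Per-class mean/variance of target features under (noisy) pseudo-labels."""
    stats = []
    for c in range(num_classes):
        fc = feats[pseudo_labels == c]
        stats.append((fc.mean(0), fc.var(0) + 1e-5))
    return stats

def dcfa_augment(stats, labels):
    """D-CFA: sample target-like features for the given source labels."""
    mu = torch.stack([stats[c][0] for c in labels.tolist()])
    var = torch.stack([stats[c][1] for c in labels.tolist()])
    return mu + var.sqrt() * torch.randn_like(mu)

def gdc_loss(stats, feats, logits):
    """GDC: align discriminative predictions with the generative classifier."""
    log_liks = []
    for mu, var in stats:  # diagonal-Gaussian log-likelihood per class
        ll = -0.5 * (((feats - mu) ** 2 / var) + var.log()).sum(1)
        log_liks.append(ll)
    gen_log_prob = F.log_softmax(torch.stack(log_liks, dim=1), dim=1)
    return F.kl_div(F.log_softmax(logits, 1), gen_log_prob,
                    log_target=True, reduction='batchmean')
```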
Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation
Transfer learning is a critical technique in training deep neural networks
for the challenging medical image segmentation task that requires enormous
resources. With the abundance of medical image data, many research institutions
release models trained on various datasets that can form a huge pool of
candidate source models to choose from. Hence, it is vital to estimate the
source models' transferability (i.e., the ability to generalize across
different downstream tasks) for proper and efficient model reuse. Existing TE
methods fall short when applied to medical image segmentation, so in this
paper we propose a new Transferability Estimation (TE) method tailored to this
task. We first analyze the drawbacks of using existing TE
algorithms for medical image segmentation and then design a source-free TE
framework that considers both class consistency and feature variety for better
estimation. Extensive experiments show that our method surpasses all current
algorithms for transferability estimation in medical image segmentation. Code
is available at https://github.com/EndoluminalSurgicalVision-IMR/CCFV. Comment: MICCAI2023 (Early Accepted)
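For intuition, a source-free TE score of this flavor might combine a class-consistency term (compact, well-separated class features) with a feature-variety term (overall feature diversity). The sketch below is one plausible instantiation under those assumptions; it is not the paper's exact CCFV measure.

```python
# Minimal sketch of a source-free transferability score combining class
# consistency and feature variety. `feats` are target features extracted by
# one candidate pre-trained encoder; the concrete measures are assumptions.
import torch

def transferability_score(feats, labels, num_classes, alpha=1.0):
    """feats: (N,D) features of target samples; labels: (N,) class indices."""
    centroids, spreads = [], []
    for c in range(num_classes):
        fc = feats[labels == c]
        centroids.append(fc.mean(0))
        spreads.append(fc.var(0).mean())
    centroids = torch.stack(centroids)
    # Class consistency: inter-class centroid distance relative to
    # within-class spread (higher = tighter, better-separated classes).
    inter = torch.cdist(centroids, centroids).sum() / (num_classes * (num_classes - 1))
    intra = torch.stack(spreads).mean()
    consistency = inter / (intra + 1e-8)
    # Feature variety: total variance of the feature space.
    variety = feats.var(0).mean()
    return consistency + alpha * variety
```

The candidate source model scoring highest on the target data would then be the one selected for fine-tuning.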
A Metabonomic Approach to Analyze the Dexamethasone-Induced Cleft Palate in Mice
Mouse models are an important way to understand the relationship between a fetus with cleft palate and changes in maternal biofluids. This paper develops a metabonomics approach to analyze dexamethasone-induced cleft palate in pregnant C57BL/6J mice and to study the relationship between changes in endogenous small-molecule metabolites in maternal plasma and the incidence of cleft palate. Pregnant mice were randomly divided into two groups, one of which was injected with dexamethasone. On day E17.5, the incidence of cleft palate in the embryos of each group was calculated, and 1H-NMR spectra of plasma metabolites from both groups were collected at the same time. The data were then analyzed using metabonomics methods (PCA and SIMCA). The results showed that the data from the two groups displayed distinctive characters and that the incidence of cleft palate differed significantly between them (P < .005). In conclusion, this study demonstrates that the metabonomics approach is a powerful and effective method for detecting abnormal maternal metabolites early in embryonic development, and it supports the idea that dexamethasone-induced changes in maternal metabolites play an important role in the incidence of cleft palate.
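The PCA step of such an analysis can be sketched briefly: standardize the binned 1H-NMR spectra and inspect the first two principal components for group separation. The file names and the binning below are hypothetical placeholders, not the study's actual pipeline.

```python
# Minimal sketch of the PCA stage of a metabonomic analysis of binned
# 1H-NMR spectra from control vs. dexamethasone-treated groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

spectra = np.load("nmr_bins.npy")     # hypothetical (n_samples, n_bins) array
groups = np.load("group_labels.npy")  # hypothetical: 0 = control, 1 = dexamethasone

# Standardize bins, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
for g, name in [(0, "control"), (1, "dexamethasone")]:
    pts = scores[groups == g]
    print(f"{name}: PC1 mean={pts[:, 0].mean():.2f}, PC2 mean={pts[:, 1].mean():.2f}")
```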
STU-Net: Scalable and Transferable Medical Image Segmentation Models Empowered by Large-Scale Supervised Pre-training
Large-scale models pre-trained on large-scale datasets have profoundly
advanced the development of deep learning. However, the state-of-the-art models
for medical image segmentation are still small-scale, with their parameters
only in the tens of millions. Further scaling them up to higher orders of
magnitude is rarely explored. An overarching goal of exploring large-scale
models is to train them on large-scale medical segmentation datasets for better
transfer capacities. In this work, we design a series of Scalable and
Transferable U-Net (STU-Net) models, with parameter sizes ranging from 14
million to 1.4 billion. Notably, the 1.4B STU-Net is the largest medical image
segmentation model to date. Our STU-Net is based on the nnU-Net framework due to
its popularity and impressive performance. We first refine the default
convolutional blocks in nnU-Net to make them scalable. Then, we empirically
evaluate different scaling combinations of network depth and width, discovering
that it is optimal to scale model depth and width together. We train our
scalable STU-Net models on the large-scale TotalSegmentator dataset and find that
increasing model size brings stronger performance gains. This observation
suggests that large models are promising for medical image segmentation.
Furthermore, we evaluate the transferability of our model on 14 downstream
datasets for direct inference and 3 datasets for further fine-tuning, covering
various modalities and segmentation targets. We observe good performance of our
pre-trained model in both direct inference and fine-tuning. The code and
pre-trained models are available at https://github.com/Ziyan-Huang/STU-Net
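The depth-and-width scaling the abstract reports as optimal can be sketched as a single compound factor applied to both the number of blocks per stage and the channel counts. The base configuration below is illustrative, not the exact STU-Net hyperparameters.

```python
# Minimal sketch of scaling a U-Net-style encoder's depth and width together.
import torch.nn as nn

def make_encoder(scale=1.0, stages=(32, 64, 128, 256), blocks_per_stage=2):
    layers, in_ch = [], 1  # single-channel medical image slices assumed
    for base_ch in stages:
        ch = int(base_ch * scale)  # scale width (channels) ...
        for _ in range(max(1, round(blocks_per_stage * scale))):  # ... and depth
            layers += [nn.Conv2d(in_ch, ch, 3, padding=1),
                       nn.InstanceNorm2d(ch), nn.LeakyReLU(inplace=True)]
            in_ch = ch
        layers.append(nn.MaxPool2d(2))  # downsample between stages
    return nn.Sequential(*layers)

small = make_encoder(scale=1.0)  # lightweight variant
large = make_encoder(scale=4.0)  # wider and deeper variant, scaled together
```

Scaling both axes with one factor keeps the ratio of representational width to network depth fixed as the model grows, which is the combination the experiments above found optimal.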